Linearly Constrained Nonsmooth Optimization for Training Autoencoders


Abstract

A regularized minimization model with $l_1$-norm penalty (RP) is introduced for training the autoencoders that belong to a class of two-layer neural networks. We show that RP can act as an exact penalty model which shares the same global minimizers, local minimizers, and d(irectional)-stationary points with the original model under mild conditions. We construct a bounded box region that contains at least one global minimizer of RP, and propose a linearly constrained regularized minimization model (LRP) for training autoencoders. A smoothing proximal gradient algorithm is designed to solve LRP. Convergence to a generalized d-stationary point of LRP is delivered. Comprehensive numerical experiments convincingly illustrate the efficiency as well as the robustness of the proposed algorithm.
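The abstract describes a smoothing proximal gradient method for an $l_1$-penalized model restricted to a bounded box. The paper's actual algorithm is not reproduced here; the following is a minimal sketch of the generic structure such a method shares — a gradient step on the smooth part, the $l_1$ proximal (soft-thresholding) step, and projection onto the box — applied to an illustrative least-squares loss. All problem data and function names are placeholders, not the authors' formulation.

```python
import numpy as np

# Sketch (NOT the paper's algorithm): proximal gradient for
#   min_x 0.5*||Ax - b||^2 + lam*||x||_1   subject to  lower <= x <= upper.
# Soft-thresholding followed by clipping is the exact prox of the combined
# l1 + box term here because the box contains the origin.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad_box(A, b, lam, lower, upper, iters=500):
    """Proximal gradient with constant step 1/L for the smooth quadratic part."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)                          # gradient of smooth part
        x = soft_threshold(x - step * g, step * lam)   # l1 proximal step
        x = np.clip(x, lower, upper)                   # projection onto the box
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -0.5, 0.8]
b = A @ x_true
x = prox_grad_box(A, b, lam=0.1, lower=-2.0, upper=2.0)
print(np.round(x, 2))
```

The paper's method additionally smooths the nonsmooth training loss itself; this sketch only shows the prox-and-project skeleton on which such schemes build.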


Similar Articles

A Derivative-free Method for Linearly Constrained Nonsmooth Optimization

This paper develops a new derivative-free method for solving linearly constrained nonsmooth optimization problems. The objective functions in these problems are, in general, non-regular locally Lipschitz continuous functions. The computation of generalized subgradients of such functions is a difficult task. In this paper we suggest an algorithm for the computation of subgradients of a broad class ...


Linearly Constrained Nonsmooth and Nonconvex Minimization

Motivated by variational models in continuum mechanics, we introduce a novel algorithm for performing nonsmooth and nonconvex minimizations with linear constraints. We show how this algorithm is actually a natural generalization of well-known non-stationary augmented Lagrangian methods for convex optimization. The relevant features of this approach are its applicability to a large variety of no...


Spectral gradient methods for linearly constrained optimization

Linearly constrained optimization problems with simple bounds are considered in the present work. First, a preconditioned spectral gradient method is defined for the case in which no simple bounds are present. This algorithm can be viewed as a quasi-Newton method in which the approximate Hessians satisfy a weak secant equation. The spectral choice of steplength is embedded into the Hessian appro...
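The "spectral choice of steplength" this excerpt refers to is the Barzilai-Borwein steplength. As a hedged illustration (not that paper's exact method), the sketch below runs a projected gradient iteration with the BB steplength $\alpha_k = s^\top s / s^\top y$, where $s = x_k - x_{k-1}$ and $y = g_k - g_{k-1}$, on a small bound-constrained convex quadratic; the problem data are invented for the example.

```python
import numpy as np

# Sketch: projected gradient with the Barzilai-Borwein (spectral) steplength
# applied to  min 0.5*x^T Q x - c^T x  subject to  lower <= x <= upper.

def spectral_projected_gradient(Q, c, lower, upper, iters=200):
    x = np.clip(np.zeros_like(c), lower, upper)
    g = Q @ x - c
    alpha = 1.0                                        # initial steplength
    for _ in range(iters):
        x_new = np.clip(x - alpha * g, lower, upper)   # project onto the bounds
        g_new = Q @ x_new - c
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0    # BB1 spectral steplength
        x, g = x_new, g_new
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
x = spectral_projected_gradient(Q, c, lower=0.0, upper=1.0)
print(np.round(x, 4))
```

For a quadratic, $\alpha_k$ is the inverse of a Rayleigh quotient of $Q$, which is why the iteration behaves like a quasi-Newton method with a scalar Hessian approximation.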


Large-scale linearly constrained optimization

An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.


Frames and Grids in Unconstrained and Linearly Constrained Optimization: A Nonsmooth Approach

This paper describes a class of frame-based direct search methods for unconstrained and linearly constrained optimization. A template is described and analyzed using Clarke's nonsmooth calculus. This provides a unified and simple approach to earlier results for grid- and frame-based methods, and also provides partial convergence results when the objective function is not smooth, undefined in some...
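The simplest member of the frame-based direct search family is compass search: poll the objective on the frame $\{x \pm h e_i\}$, move to any improving point, and shrink $h$ when the poll fails. The sketch below is a generic illustration of that template, not the paper's analyzed method; the nonsmooth test objective is invented for the example.

```python
import numpy as np

# Sketch: compass (coordinate) search, a basic frame-based direct search.
# No derivatives are used, which is why such methods tolerate nonsmoothness.

def compass_search(f, x0, h=1.0, h_min=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])    # the 2n frame directions
    for _ in range(max_iter):
        improved = False
        for d in dirs:                           # poll the frame around x
            trial = x + h * d
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            h *= 0.5                             # refine the frame
            if h < h_min:
                break
    return x, fx

f = lambda z: abs(z[0] - 1.0) + (z[1] + 0.5) ** 2   # nonsmooth at z[0] = 1
x, fx = compass_search(f, [0.0, 0.0])
print(np.round(x, 3), fx)
```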



Journal

Journal title: SIAM Journal on Optimization

سال: 2022

ISSN: 1095-7189, 1052-6234

DOI: https://doi.org/10.1137/21m1408713